πŸ“š node [[incompatibility_of_fairness_metrics|incompatibility of fairness metrics]]

incompatibility of fairness metrics

Go back to the [[AI Glossary]]

#fairness

The idea that some formal notions of fairness are mutually incompatible and cannot all be satisfied simultaneously. As a result, there is no single universal metric for quantifying fairness that can be applied to every ML problem.

While this may seem discouraging, incompatibility of fairness metrics doesn’t imply that fairness efforts are fruitless. Instead, it suggests that fairness must be defined contextually for a given ML problem, with the goal of preventing harms specific to its use cases.
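A minimal sketch of one such conflict, using toy data and hypothetical group labels: when two groups have different base rates, even a perfect classifier that satisfies equalized odds (equal true-positive and false-positive rates across groups) necessarily violates demographic parity (equal positive-prediction rates across groups).

```python
# Toy illustration (all data hypothetical): with unequal base rates,
# a perfect classifier satisfies equalized odds but not demographic parity.

def positive_rate(preds):
    """Fraction of positive predictions; demographic parity compares this."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """P(pred = 1 | label = 1); equalized odds compares this (and the FPR)."""
    on_positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(on_positives) / len(on_positives)

# Group A has a base rate of 0.5; group B has a base rate of 0.25.
labels_a = [1, 1, 0, 0]
labels_b = [1, 0, 0, 0]

# A perfect classifier reproduces the true labels exactly...
preds_a, preds_b = labels_a[:], labels_b[:]

# ...so equalized odds holds (TPR = 1 and FPR = 0 in both groups)...
assert true_positive_rate(preds_a, labels_a) == 1.0
assert true_positive_rate(preds_b, labels_b) == 1.0

# ...but demographic parity fails: positive rates are 0.5 vs 0.25.
parity_gap = positive_rate(preds_a) - positive_rate(preds_b)
print(parity_gap)  # 0.25
```

The conflict disappears only in special cases, e.g. when the groups' base rates are equal or the classifier is trivial (such as one that predicts the same label for everyone), which is why fairness criteria have to be chosen per problem rather than all at once.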

See "On the (im)possibility of fairness" for a more detailed discussion of this topic.
